Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness, is a field related to artificial intelligence and cognitive robotics whose aim is to define that which would have to be synthesized were consciousness to be found in an engineered artifact (Aleksander 1995).
Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation.
As there are many proposed types of consciousness, there are many potential types of AC. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that are amenable to a functional description, while phenomenal consciousness concerns those aspects of experience that seem to defy functional depiction, instead being characterized qualitatively in terms of "raw feels", "what it is like", or qualia (Block, 1997). Weaker versions of AC require only that functional, "access" consciousness be artificially instantiated. Stronger versions of AC hold that all aspects of consciousness are instantiable in artificial, machine systems.
For example, when the visual cortex of the brain processes neural impulses from the eyes and determines that the image consists of a spherical object in a rectangular box, this is access consciousness and is not philosophically difficult, because such pattern recognition has already been simulated by computer programs. It is far less clear, however, how to emulate phenomena such as pain, anger, motivation, attention, the feeling of relevance, modeling other people's intentions, anticipating the consequences of alternative actions, or inventing or rediscovering new concepts, tools, or procedures without reading about them or being taught.
There is considerable debate over the plausibility of AC. Some theorists (e.g., biological naturalists) hold that consciousness can only be instantiated in biological systems (Searle, 2004). A slightly more liberal view, though one still skeptical of AC, is held by theorists (e.g., type-identity theorists) who hold that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution (Block, 1978; Bickle, 2003). Their skepticism rests on the fact that there are many physical asymmetries between natural, organic systems and artificially constructed (e.g., computer) systems, and it seems reasonable to think these differences would be relevant to the generation of conscious states.
However, other theorists are more sanguine about the plausibility of AC. For some theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness (Putnam, 1967). Along these lines, some theorists (e.g., David Chalmers) have proposed that consciousness can be realized in properly designed and programmed computers.
One of the most explicit arguments for the plausibility of AC comes from David Chalmers. His proposal, found within his manuscript A Computational Foundation for the Study of Cognition, is roughly that computers perform computations and the right kinds of computations are sufficient for the possession of a conscious mind. In outline, he defends his claim thus: Computers perform computations. Computations can capture other systems’ abstract causal organization. Mental properties are nothing over and above abstract causal organization. Therefore, computers running the right kind of computations will instantiate mental properties.
The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant"; i.e., nothing over and above abstract causal organization. His rough argument for this claim is as follows. Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role" within an overall causal system. He adverts to the work of Armstrong (1968) and Lewis (1972) in claiming that "[s]ystems with the same causal topology…will share their psychological properties." Phenomenological properties, on the other hand, are not prima facie definable in terms of their causal roles. Establishing that phenomenological properties are amenable to individuation by causal role therefore requires argument. Chalmers provides his "Dancing Qualia Argument" for this purpose.
Chalmers begins by assuming that agents with identical causal organizations could have different experiences in virtue of having different material constitutions (silicon vs. neurons, for example). He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. Ex hypothesi, the experience of the agent under transformation would change as the parts were replaced, but there would be no change in causal topology and therefore no means by which the agent could "notice" the shift in experience. However, the idea that an agent could undergo qualitative changes in experience yet be unable to notice them is absurd. Given the absurdity of this conclusion, Chalmers rejects the initial premise that agents with identical causal organization can have different experiences. Thus, by his Dancing Qualia Argument, Chalmers defends his view that phenomenological properties are organizationally invariant.
Critics of AC can object that Chalmers begs the question in assuming that all mental properties are sufficiently captured by abstract causal organization. It remains a contentious proposal. Regardless, it is important for laying a philosophical foundation for the plausibility of AC.
If it were certain that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g., what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer of a building or large machine presents a particular ambiguity. Should laws be made for such a case, consciousness would also require a legal definition (for example, a machine's ability to experience pleasure or pain, known as sentience). Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though the topic has often been a theme in fiction (see below).
The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:
61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[1]
There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious. A variety of functions in which consciousness plays a role were suggested by Bernard Baars (Baars 1988) and others: Definition and Context Setting, Adaptation and Learning, Editing, Flagging and Debugging, Recruiting and Control, Prioritizing and Access-Control, Decision-making or Executive Function, Analogy-forming, Metacognition and Self-monitoring, and Autoprogramming and Self-maintenance. Igor Aleksander suggested 12 principles for artificial consciousness (Aleksander 1995): The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer; the list is not exhaustive.
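Such lists are sometimes used informally as design checklists. A minimal sketch of that use follows; the aspect names are abbreviated from the Baars and Aleksander lists above, while the audit helper itself is a hypothetical illustration rather than anything proposed by either author.

```python
# Hypothetical checklist audit: which proposed aspects of consciousness does a
# candidate architecture claim to cover? Aspect names are abbreviated from the
# lists above; the audit logic is an illustrative assumption.

REQUIRED_ASPECTS = {
    "adaptation_and_learning", "prioritizing_and_access_control",
    "decision_making", "self_monitoring", "prediction",
    "awareness_of_self", "representation_of_meaning", "emotion",
}

def audit(architecture_features):
    """Report which proposed aspects a candidate architecture claims to cover."""
    covered = REQUIRED_ASPECTS & architecture_features
    missing = REQUIRED_ASPECTS - architecture_features
    return {"covered": sorted(covered), "missing": sorted(missing)}

print(audit({"prediction", "emotion", "adaptation_and_learning"}))
```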
Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that a process, not only a state or an object, activates neurons. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagined, and it is also useful for making predictions. Such model creation includes modeling of the physical world, modeling of one's own internal states and processes, and modeling of other conscious entities.
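The idea of creating and testing alternative models of a process, then using the best-fitting one for prediction, can be illustrated with a small sketch. The three candidate "models", the scoring rule, and the sample observations below are invented for illustration and are not drawn from the neuroscience results mentioned above.

```python
# Toy illustration: maintain alternative models of an observed process, test
# them against the sensed (or imagined) sequence, and use the best one to
# predict the next observation. All models and data here are assumptions.

def candidate_models():
    # Each model predicts the next observation from the current one.
    return {
        "static": lambda x: x,          # the world stays the same
        "drift":  lambda x: x + 1.0,    # the world changes at a constant rate
        "decay":  lambda x: 0.5 * x,    # the world relaxes toward zero
    }

def test_models(observations):
    """Score each model by how well it predicted the observed process."""
    errors = {name: 0.0 for name in candidate_models()}
    for prev, nxt in zip(observations, observations[1:]):
        for name, model in candidate_models().items():
            errors[name] += abs(model(prev) - nxt)
    return min(errors, key=errors.get)   # best-fitting model of the process

sensed = [2.0, 3.0, 4.1, 5.0, 6.2]       # sensed or imagined sequence
best = test_models(sensed)
print(best, candidate_models()[best](sensed[-1]))  # winning model and its prediction
```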
Learning is also considered necessary for AC. According to Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events (Baars 1988). Axel Cleeremans and Luis Jiménez define learning as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments" (Cleeremans 2001).
The ability to predict (or anticipate) foreseeable events is considered important for AC by Igor Aleksander.[2] The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.
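A loose illustration of this evaluate-and-select idea is sketched below. The way drafts are generated, the environment representation, and the scoring function are all assumptions made for the example; this is not Dennett's own formulation of the multiple drafts model.

```python
# Illustrative "multiple drafts"-style selection: several candidate
# interpretations/plans ("drafts") are generated, each is scored against the
# current environment, and the most appropriate one is selected.

import random

def generate_drafts(n=5):
    # Each draft pairs an interpretation of the scene with a predicted payoff.
    return [{"interpretation": f"hypothesis-{i}",
             "predicted_payoff": random.random()} for i in range(n)]

def fit_to_environment(draft, environment):
    # Score how well a draft fits current conditions (here: a single scalar).
    return draft["predicted_payoff"] * environment["relevance_weight"]

environment = {"relevance_weight": 0.8}
drafts = generate_drafts()
selected = max(drafts, key=lambda d: fit_to_environment(d, environment))
print("selected draft:", selected["interpretation"])
```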
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.[3] An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not just in the past. In order to do this, a conscious machine must make coherent predictions in changing, novel environments, not only in worlds with fixed rules like a chessboard, in order to simulate and control the real world.
Subjective experiences or qualia are widely considered to be the hard problem of consciousness. Indeed, it is held to pose a challenge to physicalism, let alone computationalism.
Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory (Baars 1988, 1997). His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural language e-mail dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled (see Franklin 1995 and Franklin 2003 for details). While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task."
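The codelet-and-workspace pattern that IDA builds on can be caricatured in a few lines. The sketch below shows a generic Global Workspace-style attention cycle with invented codelet names; it is not IDA's actual Java implementation or its full architecture.

```python
# Generic Global Workspace sketch: small special-purpose codelets post content
# with an activation level, the most active one wins the "spotlight", and its
# content is broadcast back to every codelet. Names and values are invented.

from dataclasses import dataclass, field

@dataclass
class Codelet:
    name: str
    activation: float          # how strongly this codelet bids for attention
    content: str               # what it would broadcast if it wins

def receive(codelet, broadcast):
    print(f"{codelet.name} receives broadcast: {broadcast}")

@dataclass
class GlobalWorkspace:
    codelets: list = field(default_factory=list)

    def cycle(self):
        # The most active content wins the competition for attention...
        winner = max(self.codelets, key=lambda c: c.activation)
        # ...and is broadcast to all codelets.
        for c in self.codelets:
            receive(c, winner.content)
        return winner

gw = GlobalWorkspace([Codelet("skill-matcher", 0.4, "sailor prefers sea duty"),
                      Codelet("policy-checker", 0.9, "billet violates rotation policy"),
                      Codelet("email-writer", 0.2, "draft reply ready")])
gw.cycle()
```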
CLARION posits a two-level representation that explains the distinction between conscious and unconscious mental processes.
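A minimal sketch of the two-level idea follows, assuming a toy traffic scenario: an implicit (sub-symbolic) bottom level scores actions using weights that are not directly inspectable, an explicit (symbolic) top level holds reportable rules, and their recommendations are combined. The weights, rules, and mixing scheme are invented for illustration and are not the actual CLARION implementation.

```python
# Toy two-level decision sketch in the spirit of CLARION's explicit/implicit
# distinction. Everything concrete here (features, weights, rules) is assumed.

import math

def implicit_level(features):
    # Implicit knowledge: a simple weighted network, not directly reportable.
    weights = {"red_light": {"stop": 2.0, "go": -1.5},
               "road_clear": {"stop": -0.5, "go": 1.0}}
    scores = {"stop": 0.0, "go": 0.0}
    for f in features:
        for action, w in weights.get(f, {}).items():
            scores[action] += w
    return {a: 1 / (1 + math.exp(-s)) for a, s in scores.items()}  # squash to (0, 1)

def explicit_level(features):
    # Explicit knowledge: human-readable, reportable rules.
    if "red_light" in features:
        return {"stop": 1.0, "go": 0.0}
    return {}

def decide(features, mix=0.5):
    imp, exp = implicit_level(features), explicit_level(features)
    combined = {a: mix * exp.get(a, 0.0) + (1 - mix) * imp[a] for a in imp}
    return max(combined, key=combined.get)

print(decide({"red_light", "road_clear"}))   # -> "stop"
```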
CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill-learning tasks have been simulated using CLARION, spanning the spectrum from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task (Sun 2002). Among them, SRT, AGL, and PC are typical implicit learning tasks, highly relevant to the issue of consciousness because they operationalize the notion of consciousness in the context of psychological experiments.
The simulations using CLARION provide detailed, process-based interpretations of experimental data related to consciousness, in the context of a broadly scoped cognitive architecture and a unified theory of cognition. Such interpretations are important for a precise, process-based understanding of consciousness and other aspects of cognition, leading to a better appreciation of the role of consciousness in human cognition (Sun 1999). CLARION also makes quantitative and qualitative predictions regarding cognition in the areas of memory, learning, motivation, meta-cognition, and so on. These predictions either have been experimentally tested already or are in the process of being tested.
Ben Goertzel is pursuing an embodied AGI through the open-source OpenCog project. Current code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, under way at the robotics lab of Hugo de Garis at Xiamen University.
Pentti Haikonen (2003) considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or in the view that AC will spontaneously emerge in autonomous agents that have a suitably complex neuro-inspired architecture; similar views are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity implementation of the architecture proposed by Haikonen (2003) was reportedly not capable of AC, but did exhibit emotions as expected. See Doan (2009) for a comprehensive introduction to Haikonen's cognitive architecture.
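The notions of distributed signal representation and cross-modality reporting can be given a toy simulation. The Hebbian-style associative matrix below, which links an invented "visual" pattern to an "auditory" one so that presenting one modality later evokes the other, is an illustrative assumption and not Haikonen's associative-neuron design.

```python
# Toy cross-modality association with distributed signal patterns: a
# Hebbian-style matrix links a "visual" pattern to an "auditory" pattern, so
# that the visual pattern alone later evokes (reports) the auditory one.

import numpy as np

rng = np.random.default_rng(0)
visual = rng.choice([-1, 1], size=16)    # distributed "visual" signal pattern
auditory = rng.choice([-1, 1], size=16)  # distributed "auditory" signal pattern

# Hebbian association: strengthen links between co-active signal lines.
W = np.outer(auditory, visual)

# Presenting the visual pattern alone evokes the associated auditory pattern.
evoked = np.sign(W @ visual)
print("recall matches auditory pattern:", np.array_equal(evoked, auditory))
```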
Self-awareness in robots is being investigated by Junichi Takeno [1] at Meiji University in Japan. Takeno asserts that he has developed a robot capable of discriminating between its self-image in a mirror and any other robot with an identical appearance [2][3], and this claim has already been reviewed (Takeno, Inaba & Suzuki 2005). Takeno asserts that he first contrived a computational module called a MoNAD, which has a self-aware function, and then constructed the artificial consciousness system by formulating the relationships between emotions, feelings, and reason, connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror-image cognition experiment using a robot equipped with the MoNAD system. Takeno proposed the Self-Body Theory, which states that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." He argues that the most important step in developing artificial consciousness, or in clarifying human consciousness, is the development of a function of self-awareness, and he claims to have demonstrated physical and mathematical evidence for this in his thesis (Takeno 2008 [4]). He also demonstrated that robots can study episodes in memory where the emotions were stimulated and use this experience to take predictive actions to prevent the recurrence of unpleasant emotions (Torigoe, Takeno 2009).
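One generic way a robot could discriminate its own mirror image from another, identical-looking robot is to correlate its own motor commands with the motion observed in the image. The sketch below illustrates only that general idea; it is not the MoNAD system, and the noise model and threshold are assumptions.

```python
# Generic mirror self-recognition sketch: a mirror reflects the robot's own
# movements (up to noise), while another robot moves independently, so the
# correlation between issued commands and observed motion separates the cases.

import random
random.seed(0)  # reproducible illustration

def observed_motion(own_commands, is_self, noise=0.1):
    if is_self:
        return [c + random.uniform(-noise, noise) for c in own_commands]
    return [random.uniform(-1, 1) for _ in own_commands]

def correlation(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

commands = [random.uniform(-1, 1) for _ in range(50)]
for label, is_self in [("mirror image", True), ("other robot", False)]:
    r = correlation(commands, observed_motion(commands, is_self))
    print(label, "judged self" if r > 0.8 else "judged other", f"(r={r:.2f})")
```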
Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and claims in his book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language.[4] Whether this is true remains to be demonstrated, and the basic principle stated in Impossible Minds (that the brain is a neural state machine) is open to doubt.[5]
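The "brain as a neural state machine" principle can be abstracted as a system whose state changes according to learned transitions. The sketch below uses symbolic states for brevity (in Aleksander's proposal the states would be distributed patterns over many neurons) and is an illustration of the abstract idea, not his formulation.

```python
# Minimal state-machine abstraction: the system learns transitions from
# experience and follows them when familiar stimuli arrive. States and
# stimuli are symbolic placeholders for what would be neural activity patterns.

transitions = {}

def learn(state, stimulus, next_state):
    """Store an experienced transition (state, stimulus) -> next state."""
    transitions[(state, stimulus)] = next_state

def step(state, stimulus):
    """Follow a learned transition, or stay in the same state if the situation is novel."""
    return transitions.get((state, stimulus), state)

learn("resting", "alarm", "alert")
learn("alert", "all-clear", "resting")

state = "resting"
for stimulus in ["alarm", "unknown", "all-clear"]:
    state = step(state, stimulus)
    print(stimulus, "->", state)
```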
The best-known method for testing machine intelligence is the Turing test, which may be seen as an indirect test for consciousness. Another test, inspired by features of biological systems, is ConsScale, which has been proposed as a means to measure and characterize the cognitive development of artificial creatures.
Yet there is a very serious objection to the plausibility of a test for consciousness. Tests, as such, are third-person procedures in which data is accessible to independent inquirers. However, qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Accordingly, although various systems may display various signs of behavior correlated with functional consciousness, there is no conceivable way in which third-person procedures can have access to first-person phenomenological features. Ultimately, then, a test of strong versions of AC may be impossible.